
Conversation


@alexpargon alexpargon commented Oct 5, 2025

What do these changes do?

Added tests for the Meta Modeling services:

  • SUMO: Surrogate Model
  • UQ: Uncertainty Quantification
  • MOGA: Multi-Objective Genetic Algorithms

To this end, mmux-testids were added to the Meta Modeling service.

Related issue/s

How to test

The changes include a new test phase that executes e2e Playwright tests on the osparc service: it creates a project with JSONIFIER, creates a function from this project, and then launches MMuX to populate the function with jobs, using them to test each of the tools MMuX enables. A rough sketch of the flow is given below.
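In outline, the new phase does something like the following (a minimal sketch; create_project_with_jsonifier, create_function_from_project, launch_mmux_service and run_tool_and_wait_for_plot are hypothetical names standing in for the actual test helpers):

    def test_meta_modeling_tools(page):
        # Create a project whose runner is JSONIFIER (hypothetical helper).
        project_uuid = create_project_with_jsonifier(page)
        # Convert the project into a function (hypothetical helper).
        function_uuid = create_function_from_project(page, project_uuid)
        # Launch MMuX against the function; it populates the function with jobs.
        service_iframe = launch_mmux_service(page, function_uuid)
        # Exercise each Meta Modeling tool on the populated function.
        for tool in ("sumo", "uq", "moga"):
            run_tool_and_wait_for_plot(service_iframe, tool)  # hypothetical helper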

Dev-ops


codecov bot commented Oct 5, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.57%. Comparing base (0792d8a) to head (ce06cb4).
⚠️ Report is 5 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #8457      +/-   ##
==========================================
+ Coverage   87.23%   87.57%   +0.34%     
==========================================
  Files        1935     2012      +77     
  Lines       76477    79025    +2548     
  Branches     1368     1368              
==========================================
+ Hits        66717    69209    +2492     
- Misses       9354     9410      +56     
  Partials      406      406              
Flag Coverage Δ *Carryforward flag
integrationtests 63.90% <ø> (+0.02%) ⬆️
unittests 86.29% <ø> (+0.37%) ⬆️ Carriedforward from d6d33e2

*This pull request uses carry forward flags.

Components Coverage Δ
pkg_aws_library 94.98% <ø> (ø)
pkg_celery_library 83.22% <ø> (ø)
pkg_dask_task_models_library 79.37% <ø> (ø)
pkg_models_library 92.90% <ø> (ø)
pkg_notifications_library 85.20% <ø> (ø)
pkg_postgres_database 87.99% <ø> (ø)
pkg_service_integration 72.76% <ø> (ø)
pkg_service_library 71.00% <ø> (ø)
pkg_settings_library 90.29% <ø> (ø)
pkg_simcore_sdk 84.95% <ø> (ø)
agent 93.10% <ø> (ø)
api_server 91.35% <ø> (ø)
autoscaling 95.83% <ø> (ø)
catalog 92.06% <ø> (ø)
clusters_keeper 99.14% <ø> (ø)
dask_sidecar 91.72% <ø> (ø)
datcore_adapter 97.95% <ø> (ø)
director 75.72% <ø> (ø)
director_v2 90.91% <ø> (+0.01%) ⬆️
dynamic_scheduler 96.66% <ø> (∅)
dynamic_sidecar 90.44% <ø> (ø)
efs_guardian 89.83% <ø> (ø)
invitations 90.90% <ø> (ø)
payments 92.80% <ø> (ø)
resource_usage_tracker 92.32% <ø> (+0.21%) ⬆️
storage 86.64% <ø> (+0.08%) ⬆️
webclient ∅ <ø> (∅)
webserver 87.09% <ø> (+0.09%) ⬆️


Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0792d8a...ce06cb4.



mergify bot commented Oct 5, 2025

🧪 CI Insights

Here's what we observed from your CI run for ce06cb4.

❌ Job Failures

Pipeline: CI · Job: unit-tests · Health on master: Broken · Retries: 0

Member

@pcrespov pcrespov left a comment


Thanks for the PR!

We usually like to include a bit more context to help us track changes, connect related issues, and coordinate work more easily. It shouldn’t take much time.

A few suggestions:

  • Assign yourself to the issue (I went ahead and did that for you this time).
  • Add the relevant project and labels.
  • Include a short description (you can even use AI for that) and fill out the PR template sections where applicable (e.g. enumerate related PO issues).

@alexpargon alexpargon added the e2e Bugs found by or related to the end-2-end testing label Oct 6, 2025
@alexpargon alexpargon added this to the Cheops milestone Oct 6, 2025

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    plotly_graph = service_iframe.locator(".js-plotly-plot")
    plotly_graph.wait_for(state="visible", timeout=300000)
Contributor


I am a bit confused here, so you wait for the graph to show up? But do you check if the sample created is included?

Author


The function is created specifically for the test, so it's empty and has no jobs. We then launch a sampling campaign, which updates the UI when the jobs are ready. Since the runner is a JSONIFIER and we have timeouts, more than five of them consistently finish by the time the launch is done, and thus the plot is displayed.

The plot can only be displayed once more than 5 jobs are listed and finished.

If we want to wait for every job to finish, we can wait longer before refreshing, or implement a refresh mechanism every 10 seconds until all of them are complete.

Do you require the testing to also check for failed jobs? We can fail the test if any job fails.

Contributor


I'm not saying every job has to finish. But I think it would be best to check if the jobs are listed at least?

Author


So I finally replaced the simple-graph approach with one that checks the table for the status to turn to complete, refreshing until that happens. After that it selects all the new values and waits for the plot to appear; roughly along the lines of the sketch below.
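(A minimal sketch of that polling logic; the jobs-table and refresh-btn testids and the status-cell selector are hypothetical, the real selectors live in the test:)

    with log_context(logging.INFO, "Waiting for all jobs to complete...") as ctx:
        while True:
            rows = service_iframe.locator("[mmux-testid='jobs-table'] tbody tr")
            statuses = rows.locator("td.status").all_inner_texts()
            if statuses and all(s == "complete" for s in statuses):
                break
            ctx.logger.info("Jobs not done yet (%s), refreshing...", statuses)
            service_iframe.locator("[mmux-testid='refresh-btn']").click()
            page.wait_for_timeout(10_000)  # re-check every 10 seconds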

@alexpargon alexpargon requested a review from wvangeit October 9, 2025 15:48
for i in range(count_min):
    input_field = min_inputs.nth(i)
    input_field.fill(str(i + 1))
    print(f"Filled Min input {i} with value {i + 1}")
Member


use logs, not print. Get it from log_context()

Author


Right, replaced them! The replacement looks roughly like the sketch below.
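(For reference, a sketch of the print-to-log replacement, using the logger returned by log_context() as shown in a later comment on this PR; not the exact diff:)

    with log_context(logging.INFO, "Filling Min inputs") as ctx:
        for i in range(count_min):
            min_inputs.nth(i).fill(str(i + 1))
            ctx.logger.info("Filled Min input %d with value %d", i, i + 1)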

toast = service_iframe.locator("div.Toastify__toast").filter(
    has_text="Sampling started running successfully, please wait for completion."
)
toast.wait_for(state="visible", timeout=120000)  # waits up to 120 seconds
Member


Look at the style of the others. We use constants to parametrize key timeouts, so that we can tune them later:

toast.wait_for(state="visible", timeout=120000) # waits up to 120 seconds
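(E.g., something along these lines; the constant name is illustrative, and MINUTE is the millisecond constant referenced elsewhere in this PR:)

    from typing import Final

    _SAMPLING_START_TIMEOUT_MS: Final[int] = 2 * MINUTE  # tune in one place

    toast.wait_for(state="visible", timeout=_SAMPLING_START_TIMEOUT_MS)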

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
    moga_container = service_iframe.locator("[mmux-testid='moga-pareto-plot']")
Member


Q: is this unused on purpose?

Author


Replaced this mechanic with a more elaborate approach.

page.wait_for_timeout(15000)

# # then we wait a long time
# page.wait_for_timeout(1 * MINUTE)
Member


rm comments if not used. No commented-out code allowed

elif message_2.is_visible():
    message_2.wait_for(state="detached", timeout=300000)
else:
    print("No blocking message found — continuing.")
Member


log

service_iframe.locator('[mmux-testid="run-sampling-btn"]').click()
page.wait_for_timeout(1000)

with log_context(logging.INFO, "Waiting for the sampling to complete..."):
Member


FYI: There is no need to log everything as a start/stop context. Note that you can also log steps within it using the returned object:

        with log_context(
            logging.INFO,
            f"Convert {project_uuid=} / {_STUDY_FUNCTION_NAME} to a function",
        ) as ctx:
            ...
            ctx.logger.info(
                "Created function: %s", f"{json.dumps(function_data['data'], indent=2)}"
            )

Member

@sanderegg sanderegg left a comment


This looks good; there are a few things I would suggest:

  • no need to wait so much everywhere with explicit timeouts in the production test (it should run as fast as possible) if there is no functional need (click already has an integrated 30-second default timeout)
  • mmux-testid: this is very nice! Please check the locations where you do not use it, as those are fragile and will surely break as soon as the UI is modified
  • ideally, after the test everything should be deleted and cleaned up so we do not accumulate e2e data (and actually pay for it); see the sketch after this list
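(On the last point, a common way to guarantee cleanup is a fixture whose teardown runs even when the test body fails; a minimal sketch, where create_project_with_jsonifier and delete_project are hypothetical helpers, not the repo's actual API:)

    import pytest

    @pytest.fixture
    def mmux_project(page):
        project_uuid = create_project_with_jsonifier(page)  # hypothetical helper
        yield project_uuid
        # Teardown runs even if the test fails, so no e2e data accumulates.
        delete_project(page, project_uuid)  # hypothetical helper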


@alexpargon alexpargon removed this from the Cheops milestone Nov 12, 2025
@alexpargon alexpargon added this to the Imparable milestone Nov 12, 2025

Member

@pcrespov pcrespov left a comment


thx

page.keyboard.press("Tab")
page.wait_for_timeout(1000)

if "moga" in service_key.lower():
Member


MINOR: highlight these keywords by adding them as CONSTANTS, e.g. EXPECTED_SERVICE_KEY.
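(A tiny sketch of that suggestion; the constant's name comes from the comment and its value from the snippet above:)

    from typing import Final

    EXPECTED_SERVICE_KEY: Final[str] = "moga"

    if EXPECTED_SERVICE_KEY in service_key.lower():
        ...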

Member

@sanderegg sanderegg left a comment


thanks!


Labels

e2e Bugs found by or related to the end-2-end testing

Projects

None yet

Development

Successfully merging this pull request may close these issues.

E2E testing of MMUX services

5 participants